Disentangling Interpretable Factors with Supervised Independent Subspace Principal Component Analysis

Neural Information Processing Systems

The success of machine learning models relies heavily on effectively representing high-dimensional data. However, ensuring that data representations capture human-understandable concepts remains difficult, often requiring the incorporation of prior knowledge and the decomposition of data into multiple subspaces. Traditional linear methods fall short in modeling more than one subspace, while more expressive deep learning approaches lack interpretability. Here, we introduce Supervised Independent Subspace Principal Component Analysis (sisPCA), a PCA extension designed for multi-subspace learning. Leveraging the Hilbert-Schmidt Independence Criterion (HSIC), sisPCA incorporates supervision and simultaneously ensures subspace disentanglement.
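The HSIC dependence measure that sisPCA builds on can be sketched as follows. This is a minimal illustration of the standard biased empirical HSIC estimator with linear kernels, not the authors' implementation; the function name `linear_hsic` and the toy data are ours:

```python
import numpy as np

def linear_hsic(X, Y):
    """Biased empirical HSIC estimate with linear kernels.

    HSIC(X, Y) = tr(K H L H) / (n - 1)^2, where K = X X^T and
    L = Y Y^T are kernel (Gram) matrices on the two views and
    H = I - (1/n) 1 1^T is the centering matrix. The value is
    near zero when X and Y are independent and grows with
    statistical dependence between them.
    """
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K = X @ X.T
    L = Y @ Y.T
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y_dep = X @ rng.normal(size=(5, 3))   # a function of X: strong dependence
Y_ind = rng.normal(size=(100, 3))     # drawn independently of X

# A dependent pair should score much higher than an independent one.
print(linear_hsic(X, Y_dep) > linear_hsic(X, Y_ind))
```

In a multi-subspace setting, a term like this can be maximized between a subspace projection and its supervision target (to inject label information) while being minimized between pairs of subspaces (to encourage disentanglement).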